Staged Training Report ✓ Complete

Run ID: decoder_only_embed384_depth10
Generated: 2026-02-25 22:57:24
Stages Completed: 1
Total Elapsed Time: 00:04:20

Configuration

No config defaults changed since last commit.

All Staged Training Parameters (61 parameters)
Parameter  Value
total_samples  10000000
batch_size  8
stage_samples_multiplier  100000000000
update_interval  250
window_size  100
num_best_models_to_keep  1
sampling_mode  Loss-weighted
loss_weight_temperature  0.5
loss_weight_refresh_interval  50
stop_on_divergence  True
divergence_gap  0.002
divergence_ratio  1.5
divergence_patience  50
divergence_min_updates  10
val_spike_threshold  2.0
val_spike_window  15
val_spike_frequency  0.75
val_plateau_patience  250
val_plateau_min_delta  0.0001
custom_lr  0.0001
disable_lr_scaling  True
custom_warmup  -1
lr_min_ratio  0.001
resume_warmup_ratio  0.05
plateau_factor  0.8
plateau_patience  15
preserve_optimizer  False
preserve_scheduler  True
samples_mode  Train additional samples
num_random_obs_to_visualize  2
selected_frame_offset  3
runs_per_stage  5
serial_runs  True
clean_old_checkpoints  True
enable_baseline  False
baseline_runs_per_stage  1
run_id  decoder_only_embed384_depth10
seed  None
enable_wandb  True
wandb_project  developmental-robot-movement
lr_sweep.lr_min  1e-07
lr_sweep.lr_max  0.01
lr_sweep.phase_a_num_candidates  5
lr_sweep.phase_a_seeds  1
lr_sweep.phase_a_time_budget_min  3.0
lr_sweep.phase_a_survivor_count  2
lr_sweep.phase_b_seeds  3
lr_sweep.phase_b_time_budget_min  10.0
lr_sweep.ranking_metric  median_best_val
lr_sweep.min_samples_before_timeout  1000
lr_sweep.min_evals_before_stop  5
lr_sweep.save_sweep_state  True
plateau_sweep.enabled  True
plateau_sweep.plateau_ema_alpha  0.85
plateau_sweep.plateau_improvement_threshold  0.0015
plateau_sweep.plateau_patience  25
plateau_sweep.cooldown_updates  5
plateau_sweep.max_sweeps_per_stage  2
plateau_sweep.min_sweep_improvement  0.0
initial_sweep_enabled  True
stage_time_budget_min  180.0
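As a rough illustration of what the Loss-weighted sampling mode with loss_weight_temperature 0.5 could look like, here is a minimal sketch: each candidate sample is drawn with probability proportional to a temperature-scaled softmax of its recent loss, so higher-loss samples are revisited more often. The function name and details below are illustrative assumptions, not the pipeline's actual code.

```python
import math
import random

def loss_weighted_sample(losses, temperature=0.5, k=8):
    """Draw k indices with probability proportional to exp(loss / temperature).

    Lower temperature concentrates the draw on the highest-loss samples.
    """
    # Subtract the max loss before exponentiating for numerical stability.
    m = max(losses)
    weights = [math.exp((l - m) / temperature) for l in losses]
    return random.choices(range(len(losses)), weights=weights, k=k)

random.seed(0)  # deterministic draw for this example
# With temperature 0.5 the high-loss sample (index 2) dominates the batch.
batch = loss_weighted_sample([0.1, 0.2, 5.0], temperature=0.5, k=8)
```

A higher loss_weight_temperature would flatten the weights toward uniform sampling; loss_weight_refresh_interval would then control how often the per-sample losses feeding this draw are recomputed.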
World Model Architecture (config.py)
Parameter  Value
AUTOENCODER_LR  0.0002
BATCH_SIZE  1
CANVAS_HISTORY_SIZE  3
DECODER_DEPTH  8
DECODER_EMBED_DIM  384
DECODER_NUM_HEADS  6
DECODER_ONLY_DEPTH  10
EMBED_DIM  384
ENCODER_DEPTH  5
FOCAL_BETA  5
FOCAL_LOSS_ALPHA  0.1
FRAME_SIZE  (224, 224)
GRADIO_UPDATE_INTERVAL  1
LR_MIN_RATIO  0.001
MODEL_TYPE  decoder_only
NUM_HEADS  6
PATCH_SIZE  16
PERCEPTUAL_LOSS_WEIGHT  0
SEPARATOR_WIDTH  16
WARMUP_STEPS  1000
WEIGHT_DECAY  0.01
MASK_RATIO_MIN  1
MASK_RATIO_MAX  1
TRAIN_MASK_RATIO_MIN  0.5
TRAIN_MASK_RATIO_MAX  1.0
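For quick reference, a few quantities implied by the architecture table above (assuming a standard ViT-style patchification, which the PATCH_SIZE/EMBED_DIM/NUM_HEADS naming suggests):

```python
# Values copied from the config.py table above; the derived names
# (patches_per_side, num_patches, head_dim) are illustrative.
FRAME_SIZE = (224, 224)
PATCH_SIZE = 16
EMBED_DIM = 384
NUM_HEADS = 6

patches_per_side = FRAME_SIZE[0] // PATCH_SIZE  # 14 patches per side
num_patches = patches_per_side ** 2             # 196 tokens per frame
head_dim = EMBED_DIM // NUM_HEADS               # 64 dims per attention head
```

So each 224x224 frame becomes 196 tokens of dimension 384, split across 6 attention heads of 64 dimensions each, and CANVAS_HISTORY_SIZE 3 frames are available as context.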

Timing Summary

Stage Plateau Sweeps Sweep Time Training Time Stage Total
Stage 1 0 00:00:00 00:00:06 00:00:06
TOTAL 0 00:00:00 00:00:06 00:00:06

Initial LR Sweep (Stage 1): selected LR 1.00e-07 in 00:01:10
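The sweep parameters above (lr_sweep.lr_min 1e-07, lr_sweep.lr_max 0.01, 5 phase-A candidates) are consistent with candidates drawn evenly on a log scale; that spacing is an assumption here, since the report only records the bounds and counts. A minimal sketch:

```python
import math

def log_spaced_candidates(lr_min=1e-7, lr_max=1e-2, n=5):
    """Return n learning rates evenly spaced on a log10 scale,
    endpoints included (e.g. 1e-7, ~3.2e-6, ~1e-4, ~3.2e-3, 1e-2)."""
    lo, hi = math.log10(lr_min), math.log10(lr_max)
    return [10 ** (lo + i * (hi - lo) / (n - 1)) for i in range(n)]

candidates = log_spaced_candidates()
```

Under the reported configuration, phase A would then trial these 5 candidates for up to 3 minutes each with 1 seed, keep the 2 survivors, and phase B would re-run the survivors with 3 seeds for up to 10 minutes, ranking by median_best_val. That the sweep selected the lower bound 1.00e-07 may be worth a second look, since it suggests the best candidate sat at the edge of the search range.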

Stage Results

Stage Best Loss Stop Reason Samples Trained Time Sweeps LR (Initial→Final)
Stage 1 N/A max_samples 0 00:00:06 0 1.0e-07

Total Plateau Sweeps: 0

Stop Reason Breakdown

Multi-Run Statistics

Stage 1 (5 runs)

Run Best Loss Stop Reason Samples Time Selected
1 N/A max_samples 0 00:00:06 ✓
2 N/A max_samples 0 00:00:04
3 N/A max_samples 0 00:00:04
4 N/A max_samples 0 00:00:04
5 N/A max_samples 0 00:00:04

Best Checkpoint

Name: stage1_run1_final.pth
Stage: 1
Hybrid Loss (full session): 5.501557

Learning Rate Timeline with Plateau Sweeps

Stage Progression

Stage Orig Loss Train Loss Time Samples Stop Reason
1 ⭐ 5.501557 N/A 00:00:06 0 max_samples

Hybrid Loss Over Original Session (per Stage)

Stage 1 (Best) - Hybrid Loss: 5.501557

Sample Counts

Cumulative Across All Stages

No cumulative sample count data available

Per Stage

No per-stage sample count graphs available

Best Checkpoint Inference

Selected Frame 3

Actions 0, 1, 2 (prediction images not included in this text export)

Random Observations

Observation 147

Actions 0, 1, 2

Observation 769

Actions 0, 1, 2